Deterministic Time-Space Tradeoffs for k-SUM
Given a set of numbers, the k-SUM problem asks for a subset of k numbers
that sums to zero. When the numbers are integers, the time and space complexity
of k-SUM is generally studied in the word-RAM model; when the numbers are
reals, the complexity is studied in the real-RAM model, and space is measured
by the number of reals held in memory at any point.
We present a time and space efficient deterministic self-reduction for the
k-SUM problem which holds for both models, and has many interesting
consequences. To illustrate:
* 3-SUM is in deterministic time O(n^2 lglg(n)/lg(n)) and space
O(sqrt(n lg(n)/lglg(n))). In general, any polylogarithmic-time improvement
over quadratic time for 3-SUM can be converted into an algorithm with an
identical time improvement but low space complexity as well.
* 3-SUM is in deterministic time O(n^2) and space O(sqrt(n)), derandomizing
an algorithm of Wang.
* A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the
word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the
(seemingly weaker) conjecture that every O(n^{.51})-space algorithm for
3-SUM requires at least n^{2-o(1)} time on the word-RAM.
* For k >= 4, k-SUM is in deterministic O(n^{k-2+4/k}) time and O(sqrt(n))
space.
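To fix ideas, the problem statement itself can be illustrated by the classic quadratic-time 3-SUM routine (sort, then scan with two pointers). This sketch is only the textbook algorithm, not the paper's space-efficient self-reduction; note it uses linear space, which is exactly what the paper improves on.

```python
def three_sum(nums):
    """Classic O(n^2)-time 3-SUM: find a, b, c in nums with a + b + c == 0.

    Uses O(n) space; the paper's contribution is matching such time bounds
    with far less space.
    """
    nums = sorted(nums)
    n = len(nums)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                return (nums[i], nums[lo], nums[hi])
            if s < 0:
                lo += 1  # total too small: advance the small end
            else:
                hi -= 1  # total too large: retreat the large end
    return None  # no zero-sum triple exists
```

For example, `three_sum([-5, 1, 4, 2, 7])` finds the triple (-5, 1, 4).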
The Complexity of the k-means Method
The k-means method is a widely used technique for clustering points in Euclidean space. While it is extremely fast in practice, its worst-case running time is exponential in the number of data points. We prove that the k-means method can implicitly solve PSPACE-complete problems, providing a complexity-theoretic explanation for its worst-case running time. Our result parallels recent work on the complexity of the simplex method for linear programming.
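The method in question is Lloyd's iteration, which alternates an assignment step and a centroid-update step until a fixed point is reached. A minimal sketch (points as tuples, plain Python, illustrative data only); the abstract's point is that the number of such iterations can be exponential in the worst case.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's k-means heuristic: alternate assignment and centroid updates.

    A minimal sketch for illustration; real implementations use smarter
    seeding (e.g. k-means++) and vectorized distance computations.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # seed centers from the data
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each center to its cluster's mean
        # (empty clusters keep their old center).
        new_centers = [
            tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # fixed point: no assignment can change
            break
        centers = new_centers
    return centers
```

On well-separated 1-D data such as `[(0.0,), (0.2,), (10.0,), (10.2,)]` with k = 2, the iteration settles on the two cluster means after a handful of steps.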
Breaking Cosmological Degeneracies in Galaxy Cluster Surveys with a Physical Model of Cluster Structure
Forthcoming large galaxy cluster surveys will yield tight constraints on
cosmological models. It has been shown that in an idealized survey, containing
> 10,000 clusters, statistical errors on dark energy and other cosmological
parameters will be at the percent level. It has also been shown that through
"self-calibration", parameters describing the mass-observable relation and
cosmology can be simultaneously determined, though at a loss in accuracy by
about an order of magnitude. Here we examine the utility of an alternative
approach of self-calibration, in which a parametrized ab-initio physical model
is used to compute cluster structure and the resulting mass-observable
relations. As an example, we use a modified-entropy ("pre-heating") model of
the intracluster medium, with the history and magnitude of entropy injection as
unknown input parameters. Using a Fisher matrix approach, we evaluate the
expected simultaneous statistical errors on cosmological and cluster model
parameters. We study two types of surveys, in which a comparable number of
clusters are identified either through their X-ray emission or through their
integrated Sunyaev-Zel'dovich (SZ) effect. We find that compared to a
phenomenological parametrization of the mass-observable relation, using our
physical model yields significantly tighter constraints in both surveys, and
offers substantially improved synergy when the two surveys are combined. These
results suggest that parametrized physical models of cluster structure will be
useful when extracting cosmological constraints from SZ and X-ray cluster
surveys. (abridged)
Comment: 22 pages, 8 figures, accepted to ApJ
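The Fisher-matrix step mentioned above reduces to standard linear algebra: the marginalized 1-sigma error on each parameter is the square root of the corresponding diagonal entry of the inverse Fisher matrix. A minimal sketch with a hypothetical 2x2 Fisher matrix (the values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical Fisher matrix for two parameters (say, a cosmological
# parameter and an entropy-injection parameter); values are made up.
F = np.array([[400.0, 120.0],
              [120.0,  90.0]])

cov = np.linalg.inv(F)                       # parameter covariance matrix
marginalized = np.sqrt(np.diag(cov))         # 1-sigma errors, others marginalized
unmarginalized = 1.0 / np.sqrt(np.diag(F))   # 1-sigma errors, others held fixed
```

Marginalized errors are always at least as large as the fixed-parameter ones; the gap measures the degeneracy between parameters, which is exactly what combining X-ray and SZ surveys (or adding a physical cluster model) helps break.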
Cosmological Simulations of the Preheating Scenario for Galaxy Cluster Formation: Comparison to Analytic Models and Observations
We perform a set of non-radiative cosmological simulations of a preheated
intracluster medium in which the entropy of the gas was uniformly boosted at
high redshift. The results of these simulations are used first to test the
current analytic techniques of preheating via entropy input in the smooth
accretion limit. When the unmodified profile is taken directly from
simulations, we find that this model is in excellent agreement with the results
of our simulations. This suggests that preheating efficiently smooths the
accreted gas, and therefore a shift in the unmodified profile is a good
approximation even with a realistic accretion history. When we examine the
simulation results in detail, we do not find strong evidence for entropy
amplification, at least for the high-redshift preheating model adopted here. In
the second section of the paper, we compare the results of the preheating
simulations to recent observations. We show -- in agreement with previous work
-- that for a reasonable amount of preheating, a satisfactory match can be
found to the mass-temperature and luminosity-temperature relations. However --
as noted by previous authors -- we find that the entropy profiles of the
simulated groups are much too flat compared to observations. In particular,
while rich clusters converge on the adiabatic self-similar scaling at large
radius, no single value of the entropy input during preheating can
simultaneously reproduce both the core and outer entropy levels. As a result,
we confirm that the simple preheating scenario for galaxy cluster formation, in
which entropy is injected universally at high redshift, is inconsistent with
observations.
Comment: 11 pages, 13 figures, accepted for publication in ApJ
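The preheating setup described above amounts to a uniform boost of the gas entropy at high redshift. A toy sketch using the standard intracluster-medium entropy proxy K = kT / n_e^(2/3) (in keV cm^2); the temperatures, densities, and injection level below are hypothetical, not taken from the simulations:

```python
import numpy as np

# Hypothetical gas elements: temperatures (keV) and electron densities (cm^-3).
kT = np.array([0.5, 1.0, 3.0])
n_e = np.array([1e-3, 5e-4, 1e-4])

# Standard ICM entropy proxy, K = kT / n_e^(2/3), in keV cm^2.
K = kT / n_e ** (2.0 / 3.0)

# "Preheating": inject the same amount of entropy into every gas element.
# The injection level (100 keV cm^2 here) is an illustrative placeholder.
K_preheated = K + 100.0
```

The abstract's conclusion is precisely that no single choice of this one injection level can match both the core and outer entropy profiles seen in observed groups.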
Caching with Reserves
Caching is among the most well-studied topics in algorithm design, in part because it is such a fundamental component of many computer systems. Much of traditional caching research studies cache management for a single-user or single-processor environment. In this paper, we propose two related generalizations of the classical caching problem that capture issues that arise in a multi-user or multi-processor environment. In the caching with reserves problem, a caching algorithm is required to maintain at least k_i pages belonging to user i in the cache at any time, for some given reserve capacities k_i. In the public-private caching problem, the cache of total size k is partitioned into subcaches, a private cache of size k_i for each user i and a shared public cache usable by any user. In both of these models, as in the classical caching framework, the objective of the algorithm is to dynamically maintain the cache so as to minimize the total number of cache misses.
We show that the caching with reserves and public-private caching models are equivalent up to constant factors, and thus focus on the former. Unlike classical caching, both of these models turn out to be NP-hard even in the offline setting, where the page sequence is known in advance. For the offline setting, we design a 2-approximation algorithm, whose analysis carefully keeps track of a potential function to bound the cost. In the online setting, we first design an O(ln k)-competitive fractional algorithm using the primal-dual framework, and then show how to convert it online into a randomized integral algorithm with the same guarantee.
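To make the feasibility constraint of caching with reserves concrete, here is a toy LRU variant that never evicts a page from a user who is already at their reserve. This is only an illustration of the model (assuming total reserves are strictly below the cache capacity, so an eviction candidate always exists); it is not the paper's 2-approximation or primal-dual algorithm.

```python
from collections import OrderedDict

class ReserveCache:
    """Toy cache with per-user reserves: user i must always keep at least
    reserves[i] pages cached. Illustrative only, not the paper's algorithm.
    """

    def __init__(self, capacity, reserves):
        self.capacity = capacity
        self.reserves = reserves      # user -> minimum pages kept in cache
        self.cache = OrderedDict()    # page -> owning user, in LRU order
        self.misses = 0

    def _count(self, user):
        return sum(1 for u in self.cache.values() if u == user)

    def request(self, user, page):
        if page in self.cache:
            self.cache.move_to_end(page)  # hit: refresh LRU position
            return
        self.misses += 1
        if len(self.cache) >= self.capacity:
            # Evict the least recently used page whose owner is strictly
            # above their reserve, so no reserve is ever violated.
            for p, u in self.cache.items():
                if self._count(u) > self.reserves.get(u, 0):
                    del self.cache[p]
                    break
        self.cache[page] = user
```

For instance, with capacity 2 and reserve 1 for user "a", user "a"'s sole page survives even when it is the least recently used and user "b" keeps faulting.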